
    Neural Representations for Sensory-Motor Control, II: Learning a Head-Centered Visuomotor Representation of 3-D Target Position

    A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head, and the locations which the target activates on the retinas of both eyes. A Vector Associative Map, or VAM, learns the many-to-one transformation from multiple combinations of eye-and-retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors, or DVs, that are zeroed by the VAM learning process. VAMs may be organized into VAM Cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions.
    National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
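    The DV-zeroing idea can be illustrated with a short sketch. The code below is a deliberately simplified illustration, assuming a purely linear associative map, arbitrary input/output dimensions, an assumed learning rate, and a supervised teaching vector from a stand-in "foveation" mapping; it is not the paper's dynamical VAM equations, only a demonstration of how outer-product updates drive the Difference Vector toward zero.

```python
import numpy as np

# Minimal sketch of VAM-style learning (simplified; not the paper's equations).
# Inputs combine eye-position and binocular retinal signals; the output is a
# head-centered 3-D target estimate (azimuth, elevation, vergence). The
# Difference Vector (DV) is the teaching vector minus the current estimate,
# and learning drives it toward zero.

rng = np.random.default_rng(0)

n_in, n_out = 6, 3                          # assumed dimensions
A_env = rng.normal(size=(n_out, n_in))      # stand-in invariant mapping supplied by the environment
W = np.zeros((n_out, n_in))                 # learned associative weights
lr = 0.05                                   # assumed learning rate

for _ in range(2000):
    x = rng.normal(size=n_in)               # one eye-plus-retinal configuration
    teacher = A_env @ x                     # teaching vector, e.g. eye positions after foveating the target
    dv = teacher - W @ x                    # Difference Vector (error signal)
    W += lr * np.outer(dv, x)               # outer-product update zeroes the DV over training

probe = rng.normal(size=n_in)
print("|DV| on a probe input after learning:", np.linalg.norm((A_env - W) @ probe))

# Once the map is learned, a DV between a target and the current posture could be
# scaled by GO (speed) and GRO (size) gating signals to produce a movement command.
```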

    Neural Network Modeling of Sensory-Motor Control in Animals

    National Science Foundation (IRI 90-24877, IRI 87-16960); Air Force Office of Scientific Research (F49620-92-J-0499); Office of Naval Research (N00014-92-J-1309)

    An Orientation Dependent Size Illusion Is Underpinned by Processing in the Extrastriate Visual Area, LO1

    We use the simple but prominent Helmholtz squares illusion, in which a vertically striped square appears wider than a horizontally striped square of identical physical dimensions, to determine whether functional magnetic resonance imaging (fMRI) BOLD responses in V1 underpin illusions of size. We report that these simple stimuli, which differ in only one parameter, orientation, to which V1 neurons are highly selective, elicited activity in V1 that followed their physical, not their perceived, size. To further probe the role of V1 in the illusion and to investigate plausible extrastriate visual areas responsible for eliciting the Helmholtz squares illusion, we performed a follow-up transcranial magnetic stimulation (TMS) experiment in which we compared perceptual judgments about the aspect ratio of perceptually identical Helmholtz squares when no TMS was applied against selective stimulation of V1, LO1, or LO2. In agreement with the fMRI results, we report that TMS of area V1 does not compromise the strength of the illusion. Only stimulation of area LO1, and not LO2, significantly compromised the strength of the illusion, consistent with previous research showing that LO1 plays a role in the processing of orientation information. These results demonstrate the involvement of a specific extrastriate area in an illusory percept of size.

    The visual pigment xenopsin is widespread in protostome eyes and impacts the view on eye evolution

    Photoreceptor cells in the eyes of Bilateria are often classified into microvillar cells with rhabdomeric opsin and ciliary cells with ciliary opsin, each type having specialized molecular components and physiology. First data on the recently discovered xenopsin point towards a more complex situation in protostomes. In this study, we provide clear evidence that xenopsin enters cilia in the eye of the larval bryozoan Tricellaria inopinata and triggers phototaxis. As previously reported for a mollusc, we find xenopsin coexpressed with rhabdomeric opsin in eye photoreceptor cells bearing both microvilli and cilia in larvae of the annelid Malacoceros fuliginosus. This annelid is the first organism known to have both xenopsin and ciliary opsin, showing that these opsins are not necessarily mutually exclusive. Compiling existing data, we propose that xenopsin may play an important role in many protostome eyes, providing new insights into the function, evolution, and possible plasticity of animal eye photoreceptor cells.

    Baseline Shifts do not Predict Attentional Modulation of Target Processing During Feature-Based Visual Attention

    Cues that direct selective attention to a spatial location have been observed to increase baseline neural activity in visual areas that represent a to-be-attended stimulus location. Analogous attention-related baseline shifts have also been observed in response to attention-directing cues for non-spatial stimulus features. It has been proposed that baseline shifts with preparatory attention may serve as the mechanism by which attention modulates the responses to subsequent visual targets that match the attended location or feature. Using functional MRI, we localized color- and motion-sensitive visual areas in individual subjects and investigated the relationship between cue-induced baseline shifts and the subsequent attentional modulation of task-relevant target stimuli. Although attention-directing cues often led to increased baseline neural activity in feature-specific visual areas, these increases were not correlated with either behavior in the task or subsequent attentional modulation of the visual targets. These findings cast doubt on the hypothesis that attention-related shifts in baseline neural activity result in selective sensory processing of visual targets during feature-based selective attention.
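    As a rough illustration of the across-subject test implied here, one could correlate each subject's cue-period baseline shift in a feature-selective region with that subject's subsequent target-period attentional modulation; a null correlation would parallel the reported result. The sketch below uses synthetic placeholder values and a plain Pearson correlation; it does not reproduce the study's measures, regions of interest, or preprocessing.

```python
import numpy as np
from scipy import stats

# Synthetic illustration only: per-subject cue-period baseline shifts and
# target-period attentional modulation are drawn at random, then correlated.
rng = np.random.default_rng(1)
n_subjects = 20
baseline_shift = rng.normal(0.3, 0.2, n_subjects)     # cue-evoked BOLD increase, attended vs. unattended feature (a.u.)
target_modulation = rng.normal(0.5, 0.3, n_subjects)  # target response, attended vs. unattended feature (a.u.)

r, p = stats.pearsonr(baseline_shift, target_modulation)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```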

    Functional MRI studies into the neuroanatomical basis of eye movements

    Spatial Updating in Human Cortex

    Single neurons in several cortical areas in monkeys update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. The central hypothesis here is that spatial updating also occurs in humans and that it can be visualized with functional MRI. In Chapter 2, we describe experiments in which we tested the role of human parietal cortex in spatial updating. We scanned subjects during a task that involved remapping of visual signals across hemifields. This task is directly analogous to the single-step saccade task used to test spatial updating in monkeys. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. Our results demonstrate that updating of visual information occurs in human parietal cortex and can be visualized with fMRI. The experiments in Chapter 2 show that updated visual responses have a characteristic latency and response shape. Chapter 3 describes a statistical model for estimating these parameters. The method is based on a nonlinear, fully Bayesian, hierarchical model that decomposes the fMRI time series data into baseline, smooth drift, activation signal, and noise. This chapter shows that this model performs well relative to commonly used general linear models. In Chapter 4, we use the statistical method described in Chapter 3 to test for the presence of spatial updating activity in human extrastriate visual cortex. We identified the borders of several retinotopically defined visual areas in the occipital lobe. We then tested for spatial updating using the single-step saccade task. We found a roughly monotonic relationship between the strength of updating activity and position in the visual area hierarchy. We observed the strongest responses in area V4, and the weakest response in V1. We conclude that updating is not restricted to brain regions involved primarily in attention and the generation of eye movements, but rather is present in occipital lobe visual areas as well.
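    The additive decomposition described for Chapter 3 (baseline + smooth drift + activation + noise) can be sketched with a simple synthetic example. The code below recovers the components by ordinary least squares under an assumed TR, run length, block design, and polynomial drift; the thesis's model is nonlinear, fully Bayesian, and hierarchical, which this sketch does not attempt to reproduce.

```python
import numpy as np

# Synthetic voxel time series decomposed into baseline + smooth drift +
# activation + noise, recovered here with ordinary least squares.
# TR, run length, and the block design are assumptions for illustration.

rng = np.random.default_rng(2)
n_scans, tr = 200, 2.0
t = np.arange(n_scans) * tr

baseline = np.ones(n_scans)
drift = np.column_stack([t, t**2])              # low-order polynomial drift terms
drift = (drift - drift.mean(0)) / drift.std(0)  # standardize for a well-conditioned fit
boxcar = ((t % 40.0) < 20.0).astype(float)      # assumed 20 s on / 20 s off block design
X = np.column_stack([baseline, drift, boxcar])

beta_true = np.array([100.0, 1.5, -0.8, 2.0])   # baseline, two drift terms, activation amplitude
y = X @ beta_true + rng.normal(0.0, 1.0, n_scans)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("true coefficients:     ", beta_true)
print("estimated coefficients:", np.round(beta_hat, 2))
```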